Energy-Aware Deep Learning on Resource-Constrained Hardware

Millar, Josh, Haddadi, Hamed, Madhavapeddy, Anil

arXiv.org Artificial Intelligence

The use of deep learning (DL) on Internet of Things (IoT) and mobile devices offers numerous advantages over cloud-based processing. However, such devices face substantial energy constraints to prolong battery life, or may even operate intermittently via energy-harvesting. Consequently, energy-aware approaches for optimizing DL inference and training on such resource-constrained devices have garnered recent interest. We present an overview of such approaches, outlining their methodologies, their implications for energy consumption and system-level efficiency, and their limitations in terms of supported network types, hardware platforms, and application scenarios. We hope our review offers a clear synthesis of the evolving energy-aware DL landscape and serves as a foundation for future research in energy-constrained computing.


Tampa tech companies TheIncLab, Abacode add more jobs

#artificialintelligence

Abacode expects to hire eight to 10 more staffers during the next nine months. A data visualization and artificial intelligence company, Ybor City's TheIncLab outgrew its 400-square-foot space at the Undercroft and relocated to 8,000-square-foot offices in the El Pasaje building earlier this year. It now plans to hire 40 next year and pay an average wage of $85,000 annually. Positions will be in software engineering, emerging technology, data science, 3D design, and artificial intelligence. "All new positions will be in Tampa," says Adriana Avakian, CEO of TheIncLab.


Fast Convolution based on Winograd Minimum Filtering: Introduction and Development

Tong, Gan, Huang, Libo

arXiv.org Artificial Intelligence

Convolutional Neural Networks (CNNs) have been widely used in various fields and play an important role. Convolution operators are the fundamental component of convolutional neural networks, and they are also the most time-consuming part of network training and inference. In recent years, researchers have proposed several fast convolution algorithms, including FFT and Winograd. Among them, Winograd convolution significantly reduces the multiplication operations in convolution, and it also takes up less memory space than FFT convolution. Therefore, Winograd convolution has quickly become the first choice for fast convolution implementation within a few years. At present, there is no systematic summary of Winograd convolution algorithms. This article aims to fill this gap and provide detailed references for follow-up researchers. It summarizes the development of Winograd convolution from three aspects: algorithm expansion, algorithm optimization, and implementation and application, and finally offers a brief outlook on possible future directions.
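To make the minimal-filtering idea concrete: the smallest Winograd case, F(2,3), computes two outputs of a 3-tap convolution with 4 multiplications instead of 6, using fixed transform matrices. The sketch below is the standard textbook formulation, not the implementation from any particular paper:

```python
import numpy as np

# Winograd minimal filtering F(2,3): fixed transform matrices.
BT = np.array([[1, 0, -1,  0],
               [0, 1,  1,  0],
               [0, -1, 1,  0],
               [0, 1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])                # filter transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)    # output transform

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 outputs.
    The elementwise product below is the only multiplication stage:
    4 multiplications instead of the 6 needed by direct convolution."""
    m = (G @ g) * (BT @ d)
    return AT @ m

def direct_conv(d, g):
    """Reference: direct 1D convolution (6 multiplications)."""
    return np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                     d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
```

The 2D tiles used in CNN layers, such as F(2x2, 3x3), nest this 1D scheme in both dimensions, which is where the often-cited 2.25x reduction in multiplications comes from.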


A Review of Recent Advances of Binary Neural Networks for Edge Computing

Zhao, Wenyu, Ma, Teli, Gong, Xuan, Zhang, Baochang, Doermann, David

arXiv.org Artificial Intelligence

Abstract--Edge computing is promising to become one of the next hottest topics in artificial intelligence because it benefits various evolving domains such as real-time unmanned aerial systems, industrial applications, and the demand for privacy protection. This paper reviews recent advances in binary neural network (BNN) and 1-bit CNN technologies that are well suited for front-end, edge-based computing. We introduce and summarize existing work and classify it based on gradient approximation, quantization, architecture, loss functions, optimization method, and binary neural architecture search. We also introduce applications in the areas of computer vision and speech recognition and discuss future applications for edge computing. With the rapid development of information technology, cloud computing with centralized data processing can neither meet the needs of applications that require the processing of massive amounts of data, nor be used effectively when privacy requires the data to remain at the source. Edge computing has therefore become an alternative for handling data from front-end or embedded devices. To better review methods for improving the performance of binary neural networks, we consider six aspects, including gradient approximation, quantization, structural design, loss design, and optimization. Finally, we also review object detection, object tracking, and audio analysis.
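To make the gradient-approximation aspect concrete: a common BNN pattern binarizes weights in the forward pass and, since sign() has zero gradient almost everywhere, approximates its backward pass with a straight-through estimator (STE). The sketch below is illustrative of this general pattern (function names and the scaling choice are assumptions, not code from any surveyed method):

```python
import numpy as np

def binarize(w):
    """Forward pass: sign(w) scaled by the mean absolute weight,
    in the style of XNOR-Net-like scaled binarization."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def ste_grad(w, upstream, clip=1.0):
    """Backward pass: the straight-through estimator passes the
    upstream gradient through unchanged, masked to zero where
    |w| exceeds the clip threshold (to keep latent weights bounded)."""
    return upstream * (np.abs(w) <= clip)
```

During training, the full-precision "latent" weights are updated with these approximate gradients while only the binarized copies are used for inference, which is what enables XNOR/popcount arithmetic on edge hardware.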


Unison Introduces Latest Machine Learning Data Validation App

#artificialintelligence

Unison Inc., the leading provider of software and insight to government agencies, program offices, and contractors, today introduced the Data Validation Engine to support the modernization of the federal acquisition lifecycle. This transformative app utilizes machine learning, an application of Artificial Intelligence (AI), to automate configurable rules for improved data quality and accuracy. "We launched the Data Validation Engine with acquisition modernization as a top priority to put the power in the hands of federal agencies to drive compliance with their policies and procedures," said Reid Jackson, Unison President and CEO. "At Unison, we bring real-world applications of leading-edge technical innovations to the federal acquisition and contractor workforces. This app is just the latest of several new product releases built on our Insight Platform using the latest AI and RPA technologies."


DeepAISE -- An End-to-End Development and Deployment of a Recurrent Neural Survival Model for Early Prediction of Sepsis

Shashikumar, Supreeth P., Josef, Christopher, Sharma, Ashish, Nemati, Shamim

arXiv.org Machine Learning

Abstract: Sepsis, a dysregulated immune system response to infection, is among the leading causes of morbidity, mortality, and cost overruns in the Intensive Care Unit (ICU). Early prediction of sepsis can improve situational awareness amongst clinicians and facilitate timely, protective interventions. While the application of predictive analytics in ICU patients has shown early promising results, much of the work has been encumbered by high false-alarm rates. Efforts to improve specificity have been limited by several factors, most notably the difficulty of labeling sepsis onset time and the low prevalence of septic events in the ICU. We show that by coupling a clinical criterion for defining sepsis onset time with a treatment policy (e.g., initiation of antibiotics within one hour of meeting the criterion), one may rank the relative utility of various criteria through offline policy evaluation. Given the optimal criterion, DeepAISE automatically learns predictive features related to higher-order interactions and temporal patterns among clinical risk factors that maximize the data likelihood of observed time to septic events. DeepAISE has been incorporated into a clinical workflow, which provides real-time hourly sepsis risk scores. A comparative study of four baseline models indicates that DeepAISE produces the most accurate predictions (AUC 0.90 and 0.87) and the lowest false alarm rates (FAR 0.20 and 0.26) in two separate cohorts (internal and external, respectively), while simultaneously producing interpretable representations of the clinical time series and risk factors.
Introduction: Sepsis is a syndromic, life-threatening condition that arises when the body's response to infection injures its own internal organs (1). Though the condition lacks the same public notoriety as other conditions like heart attacks, 6% of all hospitalized patients in the United States carry a primary diagnosis of sepsis as compared to 2.5% for the latter (2). When all hospital deaths are ultimately considered, nearly 35% are attributable to sepsis (2). This condition stands in stark contrast to heart attacks, which have a mortality rate of 2.7-9.6% and only cost the US $12.1 billion annually, roughly half of the cost of sepsis (3).


On the use of Deep Autoencoders for Efficient Embedded Reinforcement Learning

Prakash, Bharat, Horton, Mark, Waytowich, Nicholas R., Hairston, William David, Oates, Tim, Mohsenin, Tinoosh

arXiv.org Artificial Intelligence

In autonomous embedded systems, it is often vital to reduce the number of actions taken in the real world and the energy required to learn a policy. Training reinforcement learning agents from high-dimensional image representations can be very expensive and time-consuming. Autoencoders are deep neural networks used to compress high-dimensional data, such as pixelated images, into small latent representations. This compression model is vital to efficiently learning policies, especially when learning on embedded systems. We have implemented this model on the NVIDIA Jetson TX2 embedded GPU and evaluated the power consumption, throughput, and energy consumption of the autoencoders for various CPU/GPU core combinations, frequencies, and model parameters. Additionally, we have shown the reconstructions generated by the autoencoder to analyze the quality of the generated compressed representation and the performance of the reinforcement learning agent. Finally, we have presented an assessment of the viability of training these models on embedded systems and their usefulness in developing autonomous policies. Using autoencoders, we were able to achieve 4-5x improved performance compared to a baseline RL agent with a convolutional feature extractor, while using less than 2W of power.
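As the abstract notes, the core idea is to compress high-dimensional observations into small latent codes that a policy can learn from cheaply. The optimal *linear* autoencoder coincides with PCA, which gives a dependency-free sketch of the compression step (the paper itself uses deep convolutional autoencoders on the Jetson TX2; the 64-dim observations and 8-dim latent below are purely illustrative):

```python
import numpy as np

# Illustrative stand-in for high-dimensional observations
# (e.g., flattened image features): 256 samples of dimension 64.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))

# The optimal linear autoencoder is PCA: encode with the top-k right
# singular vectors of the centered data, decode with their transpose.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:8].T                   # 64x8 encoder; the decoder is W.T

Z = (X - mu) @ W               # latent codes: 8 numbers per observation
X_hat = Z @ W.T + mu           # reconstruction from the latent code

compression = X.shape[1] / Z.shape[1]   # 8x fewer values per sample
```

A downstream RL agent would then consume `Z` instead of raw pixels, which is what yields the throughput and energy savings the paper measures; a deep autoencoder simply replaces the linear maps above with learned nonlinear encoder/decoder networks.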


The Rise of the Robot Reporter

#artificialintelligence

"The financial markets are ahead of others in this," said John Micklethwait, the editor in chief of Bloomberg. In addition to covering company earnings for Bloomberg, robot reporters have been prolific producers of articles on minor league baseball for The Associated Press, high school football for The Washington Post and earthquakes for The Los Angeles Times. MANCHESTER, N.H. (AP) -- Jonathan Davis hit for the cycle, as the New Hampshire Fisher Cats topped the Portland Sea Dogs 10-3 on Tuesday. Last week, The Guardian's Australia edition published its first machine-assisted article, an account of annual political donations to the country's political parties. And Forbes recently announced that it was testing a tool called Bertie to provide reporters with rough drafts and story templates.


How Businesses Are Using AI and Machine Learning to Leverage Events

#artificialintelligence

You should attend Adweek's Elevate: AI summit March 6 in New York. As professionals across disciplines in the healthcare, retail and financial services industries embrace data-driven decision-making and begin to experience the power of precision available at their fingertips, more marketers are turning to artificial intelligence to improve the efficacy of many parts of the sales cycle. For buyers, the purchase path has been transformed by the large amounts of data available and the technology that has been created to make sense of it. What used to involve manual searches and gathering information from word of mouth and online can now be delivered by algorithm. We're in the midst of a shift that is subtle, yet universal, affecting all facets of how businesses operate and how audiences make choices about what to buy or engage with.